An integrated process for design and control of lunar robotics using AI and simulation

Lindmark, Daniel, Andersson, Jonas, Bodin, Kenneth, Bodin, Tora, Börjesson, Hugo, Nordfeldth, Fredrik, Servin, Martin

arXiv.org Artificial Intelligence

We envision an integrated process for developing lunar construction equipment, where physical design and control are explored in parallel. In this paper, we describe a technical framework that supports this process. It relies on OpenPLX, a readable/writable declarative language that links CAD models and autonomous systems to high-fidelity, real-time 3D simulations of contacting multibody dynamics, machine-regolith interaction forces, and non-ideal sensors. To demonstrate its capabilities, we present two case studies, including an autonomous lunar rover that combines a vision-language model for navigation with a reinforcement learning-based control policy for locomotion.


WeHelp: A Shared Autonomy System for Wheelchair Users

Abuduweili, Abulikemu, Wu, Alice, Wei, Tianhao, Zhao, Weiye

arXiv.org Artificial Intelligence

There is a large population of wheelchair users, most of whom need help with daily tasks. However, according to recent reports, their needs are not properly met due to a shortage of caregivers. In this project, we therefore develop WeHelp, a shared autonomy system aimed at wheelchair users. A robot running WeHelp has three modes: following mode, remote control mode, and teleoperation mode. In following mode, the robot follows the wheelchair user automatically via visual tracking; the user can ask the robot to follow from behind, on the left, or on the right. When the wheelchair user asks for help, the robot recognizes the command via speech recognition and switches to teleoperation mode or remote control mode. In teleoperation mode, the wheelchair user takes over the robot with a joystick and controls it to complete complex tasks, such as opening doors, moving obstacles out of the way, or reaching objects on a high shelf or low on the ground. In remote control mode, a remote assistant takes over the robot and helps the wheelchair user complete such tasks. Our evaluation shows that the pipeline is useful and practical for wheelchair users. Source code and a demo are available at \url{https://github.com/Walleclipse/WeHelp}.
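The mode-switching behavior described above can be sketched as a small state machine. The command phrases and class names below are hypothetical illustrations, not WeHelp's actual vocabulary or API; only the three modes come from the abstract.

```python
from enum import Enum, auto

class Mode(Enum):
    FOLLOWING = auto()        # robot tracks the user visually
    TELEOPERATION = auto()    # user drives the robot with a joystick
    REMOTE_CONTROL = auto()   # remote assistant drives the robot

# Hypothetical mapping from recognized utterances to target modes.
COMMANDS = {
    "follow me": Mode.FOLLOWING,
    "i need help": Mode.TELEOPERATION,
    "call assistant": Mode.REMOTE_CONTROL,
}

class ModeSwitcher:
    """Tracks the active mode and switches on recognized speech commands."""

    def __init__(self):
        self.mode = Mode.FOLLOWING  # default: follow the user

    def on_command(self, utterance: str) -> Mode:
        # Unrecognized utterances leave the current mode unchanged.
        self.mode = COMMANDS.get(utterance.lower().strip(), self.mode)
        return self.mode
```

In a real system the speech recognizer would feed `on_command`, and each mode would own its own control loop; the point here is only that mode transitions are event-driven and default-safe.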


AutoInspect: Towards Long-Term Autonomous Industrial Inspection

Staniaszek, Michal, Flatscher, Tobit, Rowell, Joseph, Niu, Hanlin, Liu, Wenxing, You, Yang, Skilton, Robert, Fallon, Maurice, Hawes, Nick

arXiv.org Artificial Intelligence

We give an overview of AutoInspect, a ROS-based software system for robust and extensible mission-level autonomy. Over the past three years AutoInspect has been deployed in a variety of environments, including a mine, a chemical plant, a mock oil rig, decommissioned nuclear power plants, and a fusion reactor, for durations ranging from hours to weeks. The system combines robust mapping and localisation with graph-based autonomous navigation, mission execution, and scheduling to achieve a complete autonomous inspection system. The time from arrival at a new site to autonomous mission execution can be under an hour. It is deployed on a Boston Dynamics Spot robot using a custom sensing and compute payload called Frontier. In this work we detail the system's performance in two long-term deployments: 49 days at a robotics test facility, and 35 days at the Joint European Torus (JET) fusion reactor in Oxfordshire, UK.


Autonomous Forest Inventory with Legged Robots: System Design and Field Deployment

Mattamala, Matías, Chebrolu, Nived, Casseau, Benoit, Freißmuth, Leonard, Frey, Jonas, Tuna, Turcan, Hutter, Marco, Fallon, Maurice

arXiv.org Artificial Intelligence

We present a solution for autonomous forest inventory with a legged robotic platform. Compared to their wheeled and aerial counterparts, legged platforms offer an attractive balance of endurance and low soil impact for forest applications. In this paper, we present the complete system architecture of our forest inventory solution, which includes state estimation, navigation, mission planning, and real-time tree segmentation and trait estimation. We present preliminary results from three campaigns in forests in Finland and the UK and summarize the main outcomes, lessons, and challenges. Our UK experiment at the Forest of Dean with the ANYmal D legged platform achieved an autonomous survey of a 0.96 hectare plot in 20 min, identifying over 100 trees with typical DBH accuracy of 2 cm.
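A common way to estimate DBH (diameter at breast height) from a segmented stem is to take a horizontal slice of lidar points at 1.3 m and fit a circle. The sketch below uses the algebraic Kasa least-squares fit; this is a standard technique for this task, not necessarily the estimator used in the paper.

```python
import math

def dbh_from_cross_section(points):
    """Estimate DBH from a horizontal slice of stem points (x, y) via an
    algebraic least-squares circle fit (Kasa method): solve
    x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense."""
    n = len(points)
    # Accumulate the normal equations A^T A c = A^T b,
    # where rows of A are [x, y, 1] and b = -(x^2 + y^2).
    sxx = sxy = syy = sx = sy = 0.0
    bx = by = b1 = 0.0
    for x, y in points:
        z = -(x * x + y * y)
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        bx += x * z; by += y * z; b1 += z
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    rhs = [bx, by, b1]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Solve the 3x3 system by Cramer's rule.
    d = det3(M)
    sol = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = rhs[r]
        sol.append(det3(Mi) / d)
    D, E, F = sol
    # Circle radius from D, E, F; DBH is the diameter.
    r = math.sqrt(D * D / 4 + E * E / 4 - F)
    return 2 * r
```

In field data the slice is noisy and partially occluded, so robust variants (RANSAC over the circle fit, or cylinder fitting over a vertical window) are typically layered on top of this core estimator.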


Autonomous Overhead Powerline Recharging for Uninterrupted Drone Operations

Hoang, Viet Duong, Nyboe, Frederik Falk, Malle, Nicolaj Haarhøj, Ebeid, Emad

arXiv.org Artificial Intelligence

We present a fully autonomous self-recharging drone system capable of long-duration sustained operations near powerlines. The drone is equipped with a robust onboard perception and navigation system that enables it to locate powerlines and approach them for landing. A passively actuated gripping mechanism grasps the powerline cable during landing, after which a control circuit regulates the magnetic field inside a split-core current transformer to provide sufficient holding force as well as battery recharging. The system is evaluated in an active outdoor three-phase powerline environment. We demonstrate multiple contiguous hours of fully autonomous uninterrupted drone operations composed of several cycles of flying, landing, recharging, and takeoff, validating the capability of extended, essentially unlimited, operational endurance.


Adv3D: Generating Safety-Critical 3D Objects through Closed-Loop Simulation

Sarva, Jay, Wang, Jingkang, Tu, James, Xiong, Yuwen, Manivasagam, Sivabalan, Urtasun, Raquel

arXiv.org Artificial Intelligence

Self-driving vehicles (SDVs) must be rigorously tested on a wide range of scenarios to ensure safe deployment. The industry typically relies on closed-loop simulation to evaluate how the SDV behaves across a corpus of synthetic and real scenarios and to verify that it performs properly. However, such tests primarily exercise only the system's motion planning module and consider only behavioral variations. It is key to evaluate the full autonomy system in closed loop, and to understand how variations in sensor data based on scene appearance, such as the shape of actors, affect system performance. In this paper, we propose a framework, Adv3D, that takes real-world scenarios, performs closed-loop sensor simulation to evaluate autonomy performance, and finds vehicle shapes that make the scenario more challenging, resulting in autonomy failures and uncomfortable SDV maneuvers. Unlike prior works that add contrived adversarial shapes to vehicle rooftops or roadsides to harm perception only, we optimize a low-dimensional shape representation to modify the vehicle shape itself in a realistic manner to degrade autonomy performance (e.g., perception, prediction, and motion planning). Moreover, we find that the shape variations found with Adv3D optimized in closed loop are much more effective than those found in open loop, demonstrating the importance of finding scene appearance variations that affect autonomy in the interactive setting.
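Because the closed-loop simulator is a black box to the optimizer, searching the low-dimensional shape space can be sketched as a simple derivative-free loop. Everything below (the random-search strategy, dimensions, step size) is an illustrative assumption, not Adv3D's actual optimizer.

```python
import random

def adversarial_shape_search(evaluate_scenario, dim=8, iters=50, step=0.1, seed=0):
    """Black-box random-search sketch of closed-loop adversarial
    optimization: perturb a low-dimensional latent shape vector and keep
    candidates that increase the autonomy cost returned by the simulator.

    `evaluate_scenario(z)` runs the full closed-loop simulation with the
    actor shape decoded from latent `z` and returns a scalar cost
    (higher = more challenging for the autonomy stack)."""
    rng = random.Random(seed)
    z = [0.0] * dim                      # start from the nominal shape
    best_cost = evaluate_scenario(z)
    for _ in range(iters):
        cand = [zi + rng.gauss(0.0, step) for zi in z]
        cost = evaluate_scenario(cand)   # one closed-loop rollout per candidate
        if cost > best_cost:             # keep shapes that degrade autonomy more
            z, best_cost = cand, cost
    return z, best_cost
```

The key property this loop shares with the paper's setting is that each cost evaluation replays the full interactive simulation, so the optimizer is rewarded for appearance changes that actually propagate through perception and planning, not just for fooling a frozen perception snapshot.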


Global Big Data Conference

#artificialintelligence

It took Odysseus ten years to find his way home from the Trojan Wars, but a modern-day odyssey about to launch involves an autonomous tugboat that will find its own way on a 1,000-mile journey expected to take just a couple of weeks. The project, named Machine Odyssey in tribute to Homer's epic poem, will take a seagoing tug built by Dutch shipbuilder Damen Shipyards from Hamburg, Germany around Denmark. In keeping with the theme, the tug is christened the Nellie Bly, in homage to the American journalist, industrialist, inventor, and charity worker widely known for her bold, record-breaking solo trip around the world in 72 days. At the helm won't be an ancient mariner, but rather the ultra-modern SM300 autonomy system created by Boston maritime tech firm Sea Machines. "We recognize in today's day and age the effort on a vessel where a lot of it is still very manual today, it's still staring out those windows, it's still manual driving," said Michael Johnson, Sea Machines CEO. "The autopilots we have are very single-sensor with not a lot of feedback." A small crew will be on board to maintain the ship when the voyage begins Oct. 1, and the Nellie Bly's progress will be monitored and commanded from Boston at Sea Machines' headquarters. But Johnson stresses that's not the same as operating the ship by remote control. "The goal is 99% of the effort is taken by the autonomy system."


AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles

Wang, Jingkang, Pun, Ava, Tu, James, Manivasagam, Sivabalan, Sadat, Abbas, Casas, Sergio, Ren, Mengye, Urtasun, Raquel

arXiv.org Artificial Intelligence

As self-driving systems become better, simulating scenarios where the autonomy stack is likely to fail becomes of key importance. Traditionally, those scenarios are generated for a few scenes with respect to the planning module that takes ground-truth actor states as input. This does not scale and cannot identify all possible autonomy failures, such as perception failures due to occlusion. In this paper, we propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system. Given an initial traffic scenario, AdvSim modifies the actors' trajectories in a physically plausible manner and updates the LiDAR sensor data to create realistic observations of the perturbed world. Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack. Our experiments show that our approach is general and can identify thousands of semantically meaningful safety-critical scenarios for a wide range of modern self-driving systems. Furthermore, we show that the robustness and safety of these autonomy systems can be further improved by training them with scenarios generated by AdvSim.
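A central constraint in the abstract above is that trajectory perturbations must remain physically plausible. One simple way to enforce that, sketched below with assumed bounds rather than the paper's actual constraint set, is to reject any perturbed trajectory whose implied acceleration exceeds a vehicle's dynamic limits.

```python
def perturb_trajectory(traj, offsets, dt=0.1, a_max=4.0):
    """Sketch of a plausibility-checked trajectory perturbation: shift
    each (x, y) waypoint by the given offset, then reject the result if
    the finite-difference acceleration anywhere exceeds a_max (m/s^2).

    `traj` and `offsets` are equal-length lists of (x, y) tuples sampled
    every `dt` seconds; returns the perturbed trajectory, or None if the
    perturbation is dynamically infeasible (caller resamples)."""
    new = [(x + dx, y + dy) for (x, y), (dx, dy) in zip(traj, offsets)]
    # Central second difference approximates acceleration at each
    # interior waypoint: a_i = (p_{i-1} - 2 p_i + p_{i+1}) / dt^2.
    for i in range(1, len(new) - 1):
        ax = (new[i - 1][0] - 2 * new[i][0] + new[i + 1][0]) / dt ** 2
        ay = (new[i - 1][1] - 2 * new[i][1] + new[i + 1][1]) / dt ** 2
        if (ax * ax + ay * ay) ** 0.5 > a_max:
            return None
    return new
```

An adversarial search like AdvSim's would then only hand feasible perturbations to the sensor simulator, so every generated safety-critical scenario remains one a real vehicle could actually drive.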


Best of arXiv.org for AI, Machine Learning, and Deep Learning – January 2019 - insideBIGDATA

#artificialintelligence

Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. We hope to save you some time by picking out articles that represent the most promise for the typical data scientist. The articles listed below represent a fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Especially relevant articles are marked with a "thumbs up" icon.


Skydio Announces SDK to Make World's Cleverest Drone Even Cleverer

IEEE Spectrum Robotics

Skydio blew our minds when they announced the R1 back in February: it's by far the smartest, most autonomous consumer camera drone we've ever seen. The company promised that they'd keep on making the R1 even more capable, and today they're announcing a slew of upgrades, including a new software development kit (SDK) that lets you leverage the R1's obstacle-dodging cleverness in any custom application you can dream up. The Skydio R1 is amazing, and you should read our February article about it, but in a nutshell, it's a drone that uses an array of 12 cameras to dynamically detect and avoid obstacles while it tracks you and films what you're doing. This means that it can follow someone riding a mountain bike through a forest, dodging trees and branches and keeping them in frame the whole time. It's basically the kind of capability that every single company working on drone delivery has implicitly promised and so far failed to deliver, and now you can spend some cash (okay, kind of a lot of cash) and play with it yourself.